
    Symmetric Tensor Decomposition by an Iterative Eigendecomposition Algorithm

    We present an iterative algorithm, called the symmetric tensor eigen-rank-one iterative decomposition (STEROID), for decomposing a symmetric tensor into a real linear combination of symmetric rank-1 unit-norm outer factors using only eigendecompositions and least-squares fitting. Although originally designed for symmetric tensors whose order is a power of two, STEROID is shown to be applicable to any order through an innovative tensor embedding technique. Numerical examples demonstrate the high efficiency and accuracy of the proposed scheme, even for large-scale problems. Furthermore, we show how STEROID readily solves problems in nonlinear block-structured system identification and nonlinear state-space identification.
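    The abstract only outlines the mechanics; below is a minimal numpy sketch of one STEROID pass for the order-4 (power-of-two) case, assuming the reshape/eigendecompose/least-squares structure described above. The function name and tolerance are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def steroid_order4(A, tol=1e-10):
    """Sketch of one STEROID pass for a symmetric order-4 tensor A of
    shape (n, n, n, n). Hypothetical helper, not the reference code."""
    n = A.shape[0]
    # Step 1: reshape the order-4 tensor into an n^2 x n^2 symmetric
    # matrix and eigendecompose it.
    M = A.reshape(n * n, n * n)
    w, V = np.linalg.eigh(M)
    xs = []
    for wi, vi in zip(w, V.T):
        if abs(wi) < tol:
            continue
        # Step 2: reshape each retained eigenvector into an n x n matrix
        # (symmetrized for numerical safety) and eigendecompose again.
        W = vi.reshape(n, n)
        w2, V2 = np.linalg.eigh((W + W.T) / 2)
        for w2i, v2 in zip(w2, V2.T):
            if abs(w2i) > tol:
                xs.append(v2)  # unit-norm candidate vector
    # Step 3: least-squares fit of the real weights of the symmetric
    # rank-1 terms x (outer) x (outer) x (outer) x to the original tensor.
    B = np.stack([np.einsum('i,j,k,l->ijkl', x, x, x, x).ravel()
                  for x in xs], axis=1)
    coeffs, *_ = np.linalg.lstsq(B, A.ravel(), rcond=None)
    return coeffs, xs
```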

    A constructive arbitrary-degree Kronecker product decomposition of tensors

    We propose the tensor Kronecker product singular value decomposition (TKPSVD) that decomposes a real $k$-way tensor $\mathcal{A}$ into a linear combination of tensor Kronecker products with an arbitrary number $d$ of factors, $\mathcal{A} = \sum_{j=1}^R \sigma_j\, \mathcal{A}^{(d)}_j \otimes \cdots \otimes \mathcal{A}^{(1)}_j$. We generalize the matrix Kronecker product to tensors such that each factor $\mathcal{A}^{(i)}_j$ in the TKPSVD is a $k$-way tensor. The algorithm relies on reshaping and permuting the original tensor into a $d$-way tensor, after which a polyadic decomposition with orthogonal rank-1 terms is computed. We prove that for many different structured tensors, the Kronecker product factors $\mathcal{A}^{(1)}_j, \ldots, \mathcal{A}^{(d)}_j$ are guaranteed to inherit this structure. In addition, we introduce the new notion of general symmetric tensors, which includes many different structures such as symmetric, persymmetric, centrosymmetric, Toeplitz and Hankel tensors.
    Comment: Rewrote the paper completely and generalized everything to tensor
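    For intuition, in the classical matrix case ($k = 2$, $d = 2$) the reshape-and-permute step reduces the problem to a single SVD, as in Van Loan and Pitsianis' Kronecker product SVD. A minimal numpy sketch under that assumption follows; the function name and index conventions are ours, not from the paper.

```python
import numpy as np

def kpsvd(A, m1, n1, m2, n2):
    """Kronecker product SVD of an (m1*m2) x (n1*n2) matrix A, i.e. the
    k = 2, d = 2 special case of the TKPSVD. Returns sigma and factors
    so that A is approximately sum_j sigma[j] * kron(A2[j], A1[j])."""
    # Reshape and permute A so that Kronecker structure becomes low-rank
    # structure: the (i2, j2) block of size m1 x n1 maps to one row.
    R = (A.reshape(m2, m1, n2, n1)
          .transpose(0, 2, 1, 3)
          .reshape(m2 * n2, m1 * n1))
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    # Fold the singular vectors back into the Kronecker factors.
    A2 = [U[:, j].reshape(m2, n2) for j in range(len(s))]
    A1 = [Vt[j, :].reshape(m1, n1) for j in range(len(s))]
    return s, A2, A1

# Usage: an exact Kronecker product yields a single nonzero sigma.
B, C = np.random.randn(3, 4), np.random.randn(2, 5)
s, A2, A1 = kpsvd(np.kron(B, C), m1=2, n1=5, m2=3, n2=4)
```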

    A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms

    We propose a constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products. The algorithm, named TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1) series via the singular value decomposition (SVD). TTr1SVD naturally generalizes the SVD to the tensor regime, with properties such as uniqueness for a fixed order of indices, orthogonal rank-1 outer product terms, and easy truncation error quantification. Using an outer product column table, it also allows, for the first time, a complete characterization of all tensors orthogonal to the original tensor. Incidentally, this leads to a strikingly simple constructive proof that the maximum rank of a $2 \times 2 \times 2$ tensor over the real field is 3. We also derive a conversion of the TTr1 decomposition into a Tucker decomposition with a sparse core tensor. Numerical examples illustrate each of the favorable properties of the TTr1 decomposition.
    Comment: Added subsection on orthogonal complement tensors. Added constructive proof of maximal CP-rank of a 2x2x2 tensor. Added perturbation of singular values result. Added conversion of the TTr1 decomposition to the Tucker decomposition. Added example that demonstrates how the rank behaves when subtracting rank-1 terms. Added example with exponentially decaying singular value
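    The TTr1 construction described above is a tree of SVDs: unfold along the first mode, compute an SVD, then recurse on each reshaped right singular vector. A minimal recursive numpy sketch, with hypothetical naming and truncation tolerance:

```python
import numpy as np

def ttr1(T, tol=1e-12):
    """Sketch of the TTr1 series of a real tensor T. Returns a list of
    (weight, [u1, ..., ud]) so that T is approximately the sum of
    weight * u1 (outer) u2 (outer) ... (outer) ud. Not reference code."""
    if T.ndim == 1:
        nrm = np.linalg.norm(T)
        return [(nrm, [T / nrm])] if nrm > tol else []
    # Unfold along the first mode and compute a thin SVD.
    M = T.reshape(T.shape[0], -1)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    terms = []
    for sigma, u, v in zip(s, U.T, Vt):
        if sigma < tol:
            continue
        # Recurse on the right singular vector, reshaped to the remaining
        # modes; the resulting tree of SVDs yields orthogonal rank-1 terms.
        for w, vecs in ttr1(v.reshape(T.shape[1:]), tol):
            terms.append((sigma * w, [u] + vecs))
    return terms
```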

    Tensor Network alternating linear scheme for MIMO Volterra system identification

    This article introduces two Tensor Network-based iterative algorithms for the identification of high-order discrete-time nonlinear multiple-input multiple-output (MIMO) Volterra systems. The system identification problem is rewritten in terms of a Volterra tensor, which is never explicitly constructed, thus avoiding the curse of dimensionality. It is shown how each iteration of the two identification algorithms involves solving a linear system of low computational complexity. The proposed algorithms are guaranteed to converge monotonically, and numerical stability is ensured through the use of orthogonal matrix factorizations. The performance and accuracy of the two identification algorithms are illustrated by numerical experiments, where accurate degree-10 MIMO Volterra models are identified in about 1 second in MATLAB on a standard desktop PC.
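    As a rough illustration only: when the Volterra coefficient tensor is held in tensor-train form, fixing all cores but one makes the model linear in that core, so each sweep step is a small least-squares solve. The sketch below shows a single alternating-scheme core update for a SISO model; all names and the plain least-squares solve are our simplifications and omit the paper's orthogonalization of the cores.

```python
import numpy as np

def als_core_update(cores, k, X, y):
    """One ALS update of tensor-train core k for a SISO Volterra model
    y_t = <TT(cores), x_t (outer) ... (outer) x_t>, where the rows of X
    are the regressors x_t = [1, u_t, ..., u_{t-M+1}]. Minimal sketch."""
    N, n = X.shape
    rows = []
    for t in range(N):
        # Contract the cores to the left and right of core k with x_t.
        L = np.ones((1, 1))
        for G in cores[:k]:
            L = L @ np.einsum('aib,i->ab', G, X[t])
        R = np.ones((1, 1))
        for G in reversed(cores[k + 1:]):
            R = np.einsum('aib,i->ab', G, X[t]) @ R
        # The model is linear in core k: the regressor row is
        # L (outer) x_t (outer) R, flattened like cores[k].reshape(-1).
        rows.append(np.einsum('a,i,b->aib', L[0], X[t], R[:, 0]).ravel())
    g, *_ = np.linalg.lstsq(np.array(rows), y, rcond=None)
    cores[k] = g.reshape(cores[k].shape)
    return cores
```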

    Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models

    In the context of kernel machines, polynomial and Fourier features are commonly used to provide a nonlinear extension to linear models by mapping the data to a higher-dimensional space. Unless one considers the dual formulation of the learning problem, which renders exact large-scale learning infeasible, the exponential growth of the number of model parameters in the dimensionality of the data, caused by the tensor-product structure of the features, makes high-dimensional problems intractable. One possible approach to circumvent this exponential scaling is to exploit the tensor structure present in the features by constraining the model weights to be an underparametrized tensor network. In this paper, we quantize, i.e., further tensorize, polynomial and Fourier features. Based on this feature quantization, we propose to quantize the associated model weights, yielding quantized models. We show that, for the same number of model parameters, the resulting quantized models have a higher bound on the VC-dimension than their non-quantized counterparts, at no additional computational cost while learning from identical features. We verify experimentally how this additional tensorization regularizes the learning problem by prioritizing the most salient features in the data, and how it provides models with increased generalization capabilities. We finally benchmark our approach on a large regression task, achieving state-of-the-art results on a laptop computer.
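    To make the feature quantization concrete: a length-$2^q$ polynomial feature vector factors exactly into $q$ Kronecker factors of length 2, which is the extra tensorization the abstract refers to. A small numpy sketch of this identity, with the ordering convention and function name assumed by us:

```python
import numpy as np

def quantized_poly_features(x, q):
    """Quantized (further tensorized) polynomial features: the length-2^q
    monomial vector [1, x, x^2, ..., x^(2^q - 1)] factors exactly into a
    Kronecker product of q length-2 vectors [1, x^(2^i)]."""
    factors = [np.array([1.0, x ** (2 ** i)]) for i in reversed(range(q))]
    phi = np.array([1.0])
    for f in factors:
        phi = np.kron(phi, f)  # build the full feature vector factor by factor
    return phi, factors

# Check: the Kronecker factorization matches the dense monomial vector.
x, q = 0.7, 3
phi, _ = quantized_poly_features(x, q)
assert np.allclose(phi, x ** np.arange(2 ** q))
```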